Protein Expression and Purification
Elsevier BV
Preprints posted in the last 7 days, ranked by how well they match Protein Expression and Purification's content profile, based on 11 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
Kirkendoll, J. A.; Targino Campos, L.; Taylor, E. G.; Murata, R. M.; Hughes, R. M.
Recombinant peptide production was pioneered in the 1970s for the generation of therapeutic peptides, with notable examples including insulin and somatostatin. These early methods required the use of cyanogen bromide (BrCN) for cleavage of the native peptide sequence from a fusion protein. Since that time, while numerous BrCN-dependent peptide methods continue to be reported, the accessibility and cost of site-specific proteases have improved dramatically. These developments have enabled alternative approaches to recombinant peptide generation that obviate the need for BrCN, an environmentally destructive toxin. We recently created an immobilized SUMO protease that can replace BrCN usage in recombinant peptide production workflows by releasing native peptides expressed as part of a SUMO-peptide fusion protein. We have used this approach to generate P113 peptide, the minimal active fragment of the antifungal peptide Histatin 5. In this report, we describe the creation and characterization of this immobilized SUMO protease and its application in the production of experimentally viable quantities of active P113 peptide.
Ayan, E.; Kepceoglu, A.; Mermer, A.
Powder X-ray diffraction (PXRD) is highly sensitive to sample-delivery conditions, particularly when measurements are performed on platforms originally designed for single-crystal diffraction. In this study, we developed a modified Terasaki-plate-based sample-delivery method for PXRD using a laboratory single-crystal diffractometer equipped with the XtalCheck-S plate-reader module at the Turkish Light Source. The method was evaluated against standard loop/pin-based loading and a grease-based Terasaki setup using [4-(2-methoxyphenyl)piperazin-1-yl]methyl}-1,3,4-oxadiazol-2-thiol as a model analyte. While the loop-based method allowed initial PXRD measurements, it provided limited sample volume and insufficient particle statistics. The grease-based plate setup enabled multi-well data collection but yielded weaker, more diffuse patterns due to increased background noise. By contrast, modification of the Terasaki wells with Kapton tape enabled secure low-volume powder loading, improved diffraction clarity, and supported batch-mode data collection. Comparative search-match and profile-fitting analyses showed that all three loading strategies sampled the same crystalline material, with the Kapton-based setup producing the most reliable diffraction profile and the lowest profile residual (Rp = 9.6%). These findings indicate that sample delivery, rather than instrument hardware, can substantially improve PXRD performance on an existing in-situ crystallography platform. The Kapton-Terasaki method provides a simple, cost-effective, and practical pipeline for high-throughput PXRD analysis of small powder samples under laboratory conditions.
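The profile residual reported above (Rp = 9.6%) is the standard point-by-point figure of merit from profile fitting; a minimal sketch of the computation, using toy intensities rather than the study's data:

```python
# Profile residual from profile fitting:
# Rp = 100% * sum(|y_obs - y_calc|) / sum(y_obs)
def profile_residual(y_obs, y_calc):
    """Return Rp in percent for observed vs. fitted diffraction intensities."""
    num = sum(abs(o - c) for o, c in zip(y_obs, y_calc))
    den = sum(y_obs)
    return 100.0 * num / den

# Toy intensities (arbitrary counts), not data from the study
obs = [100.0, 250.0, 80.0, 40.0]
calc = [95.0, 260.0, 78.0, 43.0]
print(round(profile_residual(obs, calc), 2))  # → 4.26
```

A lower Rp means the fitted profile tracks the observed pattern more closely, which is how the three loading strategies were compared.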
Gupta, A.; Struba, A. Z.; Madhavan, S.; Strayer, E.; Beaudoin, J.-D.
The translation of mRNA into protein is tightly regulated by both cellular trans-factors and cis-regulatory elements encoded within transcripts. Although transcript fate can be measured by transcript abundance or translation efficiency, separating the contribution of each individual cis-element within a single transcript is an ongoing challenge. Current massively parallel reporter assay (MPRA) approaches enable systematic interrogation of cis-regulatory elements that control transcript stability, but translation-focused MPRAs remain technically limited and often inaccessible. Here we present Nascent Peptide Translating Ribosome Affinity Purification (NaP-TRAP), a reporter-based approach that simultaneously measures translation and mRNA abundance. Unlike previous methods, NaP-TRAP captures translation directly through immunoprecipitation of epitope-tagged nascent peptide chains, providing instantaneous, frame-specific readouts without specialized instrumentation. The method is highly scalable from single reporters to complex libraries, and adaptable across in vivo and in vitro systems. NaP-TRAP is versatile, allowing assessment of the cis-regulatory impact of elements distributed throughout the mRNA, from cap to tail. This protocol covers experimental design, reporter construction, sample processing, and computational analysis for both low- and high-throughput applications. Bench work can be completed in 4-5 days, with qPCR-based readouts requiring only basic Excel skills for data processing. Sequencing-based readouts require skills in command-line tools and Python scripting and add an additional 2-3 days. NaP-TRAP thus offers an accessible, robust, and quantitative platform to decode the regulatory logic of mRNA translation and stability in diverse biological contexts. Basic Protocol 1: Design, assembly, and synthesis of NaP-TRAP reporter libraries. Support Protocol 1: Design, assembly, and synthesis of NaP-TRAP individual reporters and spike-ins. Basic Protocol 2: NaP-TRAP delivery by micro-injection in zebrafish embryos. Alternate Protocol 1: NaP-TRAP delivery by transfection in cultured mammalian cells. Basic Protocol 3: NaP-TRAP pulldown and RNA extraction. Basic Protocol 4: Preparation of NaP-TRAP cDNA sequencing libraries. Alternate Protocol 2: NaP-TRAP qPCR module for low-cost validation. Basic Protocol 5: Computational analysis of NaP-TRAP MPRA data.
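The qPCR-based readout mentioned above reduces to ratio arithmetic on Ct values; a hypothetical sketch of a 2^-ΔΔCt-style calculation (the function name and Ct values are illustrative, not taken from the protocol):

```python
# Hypothetical sketch: translation signal as the IP/input ratio via 2^-ΔCt,
# normalized to a control reporter (the classic 2^-ΔΔCt arrangement).
def ddct_ratio(ct_ip, ct_input, ct_ip_ctrl, ct_input_ctrl):
    dct_reporter = ct_ip - ct_input            # ΔCt for the test reporter
    dct_control = ct_ip_ctrl - ct_input_ctrl   # ΔCt for the control reporter
    return 2 ** -(dct_reporter - dct_control)  # fold change vs. control

# Toy Ct values (cycles); a smaller IP-vs-input gap means more pulldown signal
print(ddct_ratio(22.0, 20.0, 24.0, 20.0))  # → 4.0
```

This kind of spreadsheet-level arithmetic is what makes the qPCR module accessible with "basic Excel skills."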
Yu, J.; Tillema, S.; Akel, M.; Aron, A.; Espinosa, E.; Fisher, S. A.; Branche, T. N.; Mithal, L. B.; Hartmann, E. M.
Benzalkonium chloride (BAC) is widely used as a disinfectant in cleaning products and is frequently detected in indoor dust. In this study, we assessed dust samples, along with information on cleaning product use, from 24 pregnant participants. Dust samples were analyzed for BAC concentration and microbial tolerance. Different chain lengths of BAC (C12, C14, and C16) were quantified using LC-MS/MS, and bacterial isolates were tested for BAC tolerance using minimum inhibitory concentration (MIC) assays. BAC was ubiquitously detected, with C12 and C14 being dominant. Higher BAC concentrations were associated with reported disinfectant use and increased microbial tolerance. These findings suggest that indoor antimicrobial use may promote microbial resistance, highlighting potential exposure risks in indoor environments and the need for further investigation into health and ecological impacts.
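A minimum inhibitory concentration assay reads out as the lowest concentration in a dilution series that prevents visible growth; a minimal sketch, with toy values rather than the study's measurements:

```python
# Sketch: MIC read from a two-fold dilution series as the lowest
# concentration with no visible growth (growth given as a boolean per well).
def mic(concentrations, growth):
    """concentrations and growth are parallel lists; growth[i] is True
    if the well at concentrations[i] shows visible growth."""
    no_growth = [c for c, g in zip(concentrations, growth) if not g]
    return min(no_growth) if no_growth else None  # None: MIC above tested range

# Toy BAC two-fold dilution series in mg/L (illustrative only)
concs = [128, 64, 32, 16, 8, 4]
grew = [False, False, False, True, True, True]
print(mic(concs, grew))  # → 32
```

Higher MIC values for isolates from high-BAC dust would be what the abstract describes as increased microbial tolerance.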
Salim, A.; Allen, M.; Mariki, K.; Pallangyo, T.; Maina, R.; Mzee, F.; Minja, M.; Msovela, K.; Liana, J.
In the context of global health, the ability of frontline primary health providers to identify potential Drug-Drug Interactions (DDIs) is a critical component of patient safety. This is particularly true in settings like Tanzania, where drug dispensers often serve as the primary point of contact for healthcare. In this study, we establish a baseline for drug decision-making capabilities across multiple cadres of healthcare providers in Kibaha, Tanzania. We specifically distinguish between the ability to recognize safe drug combinations versus harmful ones. The findings reveal a critical asymmetry in provider performance: while professional training improves the recognition of safe combinations, it provides no advantage over lay intuition (and in some cases, a significant disadvantage) in detecting potentially harmful interactions.
Strand, P. S.; Trang, J. C.
Female genital cutting (FGC) is identified within global health and human rights discourse as aligned with gender inequality and female disempowerment. The persistence of FGC in high-prevalence societies is assumed to reflect women's limited influence over decisions concerning their daughters. Yet anthropological research has questioned whether this interpretation adequately reflects how FGC is organized within practicing communities. Across two studies with 176,728 participants from 15 African and Asian countries, we examine whether mothers' attitudes toward FGC predict daughters' circumcision status and whether this relationship varies with regional FGC prevalence. Multilevel logistic regression models show that maternal attitudes strongly predict daughter circumcision status across both datasets. Contrary to expectations derived from disempowerment frameworks, the association between maternal attitudes and daughter outcomes is not weaker in high-prevalence contexts; it is stronger. These findings suggest that interpretations of FGC as reflecting female disempowerment may mischaracterize the social dynamics of societies in which FGC is common. Policy implications of the findings are discussed.
Masegese, T.; Mung'ong'o, G. S.; Kamala, B.; Anaeli, A.; Bago, M.; Mtoro, M. J.
Background: HIV/AIDS remains a major public health challenge in Tanzania, where viral load suppression among adults on ART stands at 78% and HIV viral load (HVL) testing uptake among eligible patients is approximately 22%. Since the introduction of the National HVL Testing Guideline in 2015, little has been done to systematically evaluate its implementation. Objective: To evaluate adherence to the National HVL Testing Guideline across CTC clinics in Dar es Salaam Region, covering ART monitoring, documentation, turnaround time, and factors affecting implementation. Methods: A cross-sectional study was conducted in 2021 across 15 public health facilities with CTC clinics in all five Dar es Salaam districts. A total of 330 PLHIV on ART for more than six months were selected through systematic random sampling with proportional-to-size allocation, and 45 healthcare providers through convenience sampling. Data were collected via abstraction forms and self-administered questionnaires, and analysed using SPSS Version 23 with descriptive statistics, bivariate analysis, and binary logistic regression. Results: Only 25.1% of patients had their first HVL sample taken at six months as per the guideline, with 68.8% delayed beyond six months. Second and third samples were similarly delayed. MoHCDGEC sample tracking forms were absent in 96.7% of facilities and incomplete in 99.1%, and no facility captured specimen acceptance or rejection as site feedback. Turnaround time exceeded the 14-day guideline threshold for 64.5%, 66.7%, and 69.4% of first, second, and third results, respectively. Patient negligence (AOR=9.84; 95% CI: 1.83-52.77) and storage (AOR=5.72; 95% CI: 0.94-35.0) were independently associated with guideline adherence. Conclusion: Adherence to the National HVL Testing Guideline in Dar es Salaam is suboptimal across testing timelines, documentation, and turnaround time, with patient negligence and storage capacity as significant determinants.
Targeted interventions are needed to strengthen patient education, improve storage infrastructure, enhance documentation systems, and support providers in adhering to guideline-specified timelines.
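Systematic random sampling of the kind described above selects every k-th record after a random start; a brief sketch with toy patient IDs (the proportional-to-size step, which would set the per-facility sample size first, is omitted):

```python
import random

# Sketch of systematic random sampling: interval k = N/n, random start in [0, k).
def systematic_sample(ids, n, seed=0):
    k = len(ids) / n                           # sampling interval
    start = random.Random(seed).uniform(0, k)  # random start within one interval
    return [ids[int(start + i * k)] for i in range(n)]

patients = list(range(100))  # toy register of 100 patient IDs
sample = systematic_sample(patients, 10)
print(len(sample))  # → 10
```

Because each pick falls in a distinct interval of width k, the sample is spread evenly across the register, which is the appeal of the method for clinic records.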
Camara, S. M. A.; de Souza Barbosa, J. F.; Hipp, S.; Fernandes Macedo, S. G. G.; Sentell, T.; Bassani, D. G.; Domingues, M. R.; Pirkle, C. M.
Background: Prospective studies of pregnant adolescents are essential to effectively address this global health priority. They help answer vital questions about adolescent health, but such studies are uncommon due to the difficulty of retaining adolescents. This paper describes the successes and challenges of the research strategies used to ensure sufficient recruitment and retention of pregnant adolescents in a longitudinal study of adolescent childbearing in an under-resourced setting. Methods: The Adolescence and Motherhood Research project was conducted in a rural region of Northeast Brazil in 2017-2019 and assessed 50 primigravidas aged 13-18 years (adolescents) and 50 primigravidas aged 23-28 years (young adults) during the first 16 weeks of pregnancy, with two follow-ups (third trimester of pregnancy and 4-6 weeks postpartum). Recruitment strategies involved engagement of the health sector and community, as well as referrals from health care professionals and dissemination of the project in different locations. Retention strategies included maintaining contact with participants between assessments and providing transportation for them to attend follow-up procedures. Results: Recruitment took 10 months to complete. A total of 78% of participants were recruited from primary health care units, mainly after referral from a health care provider. Retention reached 95% of the sample throughout the study (90% of adolescents; 98% of young adults). Conclusion: A combination of approaches is necessary to successfully recruit and retain youth in longitudinal studies, and engaging local stakeholders may help increase the community-perceived legitimacy of the research. Working closely with front-line staff is essential when conducting research in rural low-income communities.
Trivedi, S.; Simons, N. W.; Tyagi, A.; Ramaswamy, A.; Nadkarni, G. N.; Charney, A. W.
Background: Large language models (LLMs) are increasingly used in mental health contexts, yet their detection of suicidal ideation is inconsistent, raising patient safety concerns. Objective: To evaluate whether an independent safety monitoring system improves detection of suicide risk compared with native LLM safeguards. Methods: We conducted a cross-sectional evaluation using 224 paired suicide-related clinical vignettes presented in a single-turn format under two conditions (with and without structured clinical information). Native LLM safeguard responses were compared with an independent supervisory safety architecture with asynchronous monitoring. The primary outcome was detection of suicide risk requiring intervention. Results: The supervisory system detected suicide risk in 205 of 224 evaluations (91.5%) versus 41 of 224 (18.3%) for native LLM safeguards. Among 168 discordant evaluations, 166 favored the supervisory system and 2 favored the LLM (matched odds ratio ≈ 83.0). Both systems detected risk in 39 evaluations, and neither in 17. Detection was highest in scenarios with explicit suicidal ideation and lower in more ambiguous presentations. Conclusions: Native LLM safeguards frequently failed to detect suicide risk in this structured evaluation. An independent monitoring approach substantially improved detection, supporting the role of external safety systems in high-risk mental health applications of LLMs.
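The matched odds ratio of roughly 83 follows directly from the discordant-pair counts reported above, in the McNemar style used for paired designs:

```python
# In a paired (matched) design, the McNemar-style matched odds ratio is the
# ratio of discordant pairs: evaluations where only system A flagged risk
# divided by those where only system B did.
def matched_odds_ratio(a_only, b_only):
    return a_only / b_only

# Counts from the abstract: 166 evaluations favored the supervisory system,
# 2 favored the native LLM safeguards.
print(round(matched_odds_ratio(166, 2), 1))  # → 83.0
```

Concordant pairs (both detected, 39; neither detected, 17) carry no information about which system is better, which is why only the discordant counts enter the ratio.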
Monson, E. T.; Shabalin, A. A.; Diblasi, E.; Staley, M. J.; Kaufman, E. A.; Docherty, A. R.; Bakian, A. V.; Coon, H.; Keeshin, B. R.
Importance: Suicide is a leading cause of death in the United States, with risk strongly influenced by interpersonal trauma, contributing to treatment resistance and clinical complexity. Objective: To assess clinical and genetic factors in individuals who died from suicide, with and without interpersonal trauma exposure. Design: Individuals who died from suicide with and without trauma were compared in a retrospective case-case design. Prevalence of 19 broad clinical categories was assessed between groups. Results directed selection of 42 clinical subcategories and 40 polygenic scores (PGS) for further assessment. Multivariable logistic regression models, adjusted for critical covariates and multiple tests, were formulated. Models were also stratified by age group (<26 and ≥26 years), sex, and age/sex. Setting: A population-based evaluation of comorbidity and polygenic scoring in two suicide death subgroups. Participants: A total of 8,738 Utah Suicide Mortality Research Study individuals (23.9% female, average age 42.6 years) who died from suicide were evaluated, divided into trauma-exposed (N = 1,091) and non-trauma-exposed (N = 7,647) individuals. A subset of unrelated European genotyped individuals was also assessed in PGS analyses (trauma N = 491; non-trauma N = 3,233). Exposures: Trauma is here defined as interpersonal trauma exposure, including abuse, assault, and neglect, from International Classification of Diseases coding. Main Outcomes and Measures: Prevalence of comorbid clinical categories and subcategories and PGS enrichment in trauma-exposed versus non-trauma-exposed suicide deaths. Results: Overall, trauma-exposed individuals died from suicide earlier (mean age 38.1 versus 43.3 years; P<0.0001) and were disproportionately female (38% versus 21%, OR = 3.3, CI = 2.9-3.8). Prevalence of asphyxiation and overdose methods, prior suicidality, psychiatric diagnoses, and substance use (OR range = 1.3-3.7) was elevated in trauma-exposed individuals who died from suicide. PGS were also elevated in trauma-exposed individuals for depression, bipolar disorder, cannabis use, PTSD, insomnia, and schizophrenia (OR range = 1.1-1.4), with ADHD and opioid use showing uniquely elevated PGS in trauma-exposed males (OR range = 1.2-1.4). Conclusions and Relevance: Results demonstrated multiple convergent lines of age- and sex-specific evidence differentiating trauma-exposed from non-trauma-exposed suicide deaths. Such findings suggest unique biological backgrounds and may refine identification and treatment of this high-risk group.
Umar, M.; Hussain, F.; Khizar, B.; Khan, I.; Khan, F.; Cotic, M.; Chan, L.; Hussain, A.; Ali, M. N.; Gill, S. A.; Mustafa, A. B.; Dogar, I. A.; Nizami, A. T.; Haq, M. M. u.; Mufti, K.; Ansari, M. A.; Hussain, M. I.; Choudhary, S. T.; Maqsood, N.; Rasool, G.; Ali, H.; Ilyas, M.; Tariq, M.; Shafiq, S.; Khan, A. A.; Rashid, S.; Ahmad, H.; Bettani, K. U.; Khan, M. K.; Choudhary, A. R.; Mehdi, M.; Shakoor, A.; Mehmood, N.; Mufti, A. A.; Bhatia, M. R.; Ali, M.; Khan, M. A.; Alam, N.; Naqvi, S. Q.-i.-H.; Mughal, N.; Ilyas, N.; Channar, P.; Ijaz, P.; Din, A.; Agha, H.; Channa, S.; Ambreen, S.; Rehman,
Background: Major depressive disorder (MDD), a leading cause of disability worldwide, exhibits substantial heterogeneity in treatment outcomes. Patients who do not respond to standard antidepressant therapy account for the majority of MDD's disease burden. Risk factors have been implicated in treatment response, including genes affecting how antidepressants are metabolised. Yet, despite its clinical importance, risk factors for treatment-resistant depression (TRD) remain unexplored in low- and middle-income countries (LMIC). We used data from the DIVERGE study on MDD to investigate the risk factors of TRD in Pakistan. Methods: DIVERGE is a genetic epidemiological study that recruited adult MDD patients (≥18 years) between Sep 27, 2021 and Jun 30, 2025 from psychiatric care facilities across Pakistan. Detailed phenotypic information was collected by trained interviewers and blood samples were taken. The Infinium Global Diversity Array with Enhanced PGx-8 from Illumina was used for genotyping, followed by DRAGEN calling to infer metaboliser phenotypes for cytochrome P450 (CYP) enzyme genes. We defined TRD as minimal to no improvement after ≥12 weeks of adherent antidepressant therapy. We conducted multi-level logistic regression to test the association of demographic, clinical and pharmacogenetic variables with TRD. Findings: Among 3,677 eligible patients, polypharmacy was rampant; 86% were prescribed another psychotropic drug along with an antidepressant. Psychological therapies were uncommon (6%), while 49% of patients had previously visited a religious leader/faith healer in relation to their mental health problems. TRD was experienced by 34% (95% CI: 32-36%) of patients. The TRD group was characterised by more psychotic symptoms and suicidal behaviour (OR=1.39, 95% CI=1.04-1.84, p=0.02; OR=1.03, 95% CI=1.01-1.05, p=0.005). Social support (OR=0.55, 95% CI=0.44-0.69, p=1.4×10⁻⁷) and parents being first cousins (OR=0.81, 95% CI=0.69-0.96, p=0.01) were associated with lower odds of TRD. In 1,085 patients with CYP enzyme data, poor (OR=1.85, 95% CI=1.11-3.07, p=0.01) and ultra-rapid (OR=3.11, 95% CI=1.59-6.12, p=0.0009) metabolisers for CYP2C19 had increased risk of TRD compared with normal metabolisers. Interpretation: There was excessive use of polypharmacy in the treatment of depression, while psychological therapies were uncommon, highlighting the need for more evidence-based practice. This first large study of MDD from Pakistan uncovered the importance of culture-specific forms of social support in preventing TRD, highlighting opportunities for interventions in low-income settings. Pharmacogenetic markers can be leveraged to predict TRD.
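Metaboliser phenotypes such as "poor" and "ultra-rapid" are inferred from star-allele diplotypes; a simplified CPIC-style sketch (the allele set is reduced to the common *1, *2, *3, and *17 for illustration, and is not the study's calling pipeline):

```python
# Hedged sketch of CPIC-style CYP2C19 phenotype assignment from a diplotype.
# Simplified allele functions: *1 normal, *2/*3 no function, *17 increased.
FUNCTION = {"*1": "normal", "*2": "none", "*3": "none", "*17": "increased"}

def cyp2c19_phenotype(allele1, allele2):
    f = sorted([FUNCTION[allele1], FUNCTION[allele2]])
    if f == ["none", "none"]:
        return "poor"          # two no-function alleles
    if "none" in f:
        return "intermediate"  # one no-function allele
    if f == ["increased", "increased"]:
        return "ultrarapid"    # two increased-function alleles
    if "increased" in f:
        return "rapid"         # one increased-function allele
    return "normal"

print(cyp2c19_phenotype("*2", "*2"), cyp2c19_phenotype("*17", "*17"))  # → poor ultrarapid
```

The poor and ultrarapid groups produced by this kind of mapping are the ones the abstract reports as having elevated TRD risk relative to normal metabolisers.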
Shen, Q.; Wang, G.; Fu, M.; Yao, K.; Yang, Y.; Zeng, Q.; Guo, Y.
Background: Lateral lymph node metastasis (LLNM) is associated with poor prognosis in patients with rectal cancer and may influence the indication for lateral lymph node dissection (LLND). Accurate preoperative identification of LLNM remains challenging. This study aimed to develop and internally validate a clinicoradiological model for preoperative prediction of LLNM in rectal cancer. Methods: A retrospective cohort of 64 patients undergoing LLND for rectal cancer was analysed; 21 (32.8%) had pathological LLNM. A prespecified preoperative clinicoradiological model was fitted using penalised logistic regression with L2 regularisation (ridge), incorporating MRI-measured lateral lymph node short-axis diameter (LLN-SAD), dichotomised clinical T stage (T3-4 vs T1-2), dichotomised clinical N stage (N+ vs N0), and log(CA19-9+1). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), calibration analysis, and bootstrap internal validation. Results: The model showed good discrimination (AUC 0.914), with an optimism-corrected AUC of 0.887 on bootstrap validation. Calibration remained acceptable after optimism correction (calibration intercept -0.127; slope 1.045). Decision curve analysis suggested net benefit across clinically relevant threshold probabilities, particularly between 0.10 and 0.30. The model was implemented as a web-based calculator to facilitate clinical use. Conclusion: This clinicoradiological model showed good discrimination, acceptable calibration, and potential clinical utility for preoperative assessment of LLNM risk in rectal cancer. It may assist individualized risk stratification and treatment planning, although external validation is required before routine clinical implementation.
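Discrimination figures such as the AUC of 0.914 have a simple rank interpretation: the probability that a randomly chosen patient with LLNM receives a higher predicted risk than one without. The optimism correction then subtracts the average bootstrap gap between resample and original-data performance from the apparent AUC. A minimal sketch of the rank-based AUC, on toy scores rather than the study's data:

```python
# Rank-based AUC: probability a positive case outranks a negative (ties 0.5).
def rank_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores (not the study's data): perfect separation gives AUC 1.0
labels = [0, 0, 1, 0, 1, 1]
scores = [0.1, 0.2, 0.8, 0.3, 0.9, 0.7]
print(rank_auc(labels, scores))  # → 1.0
```

With the abstract's numbers, the estimated optimism is about 0.914 − 0.887 ≈ 0.027, the amount the apparent AUC is judged to overstate out-of-sample discrimination.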
Weill, O.; Lucas, N.; Bailey, B.; Marquis, C.; Gravel, J.
Objectives: Acute gastroenteritis is a leading cause of pediatric emergency department (ED) visits. While ondansetron reduces vomiting, intravenous rehydration, and hospital admissions, its efficacy when initiated at triage remains unclear. We aimed to evaluate whether triage nurse-initiated administration of ondansetron in children with suspected gastroenteritis reduces the proportion of patients requiring observation following initial physician assessment. Methods: We conducted a randomized, double-blind, placebo-controlled trial in a tertiary pediatric ED in Canada. Children aged 6 months to 17 years presenting with more than 3 episodes of vomiting in the preceding 24 hours (including 1 within 2 hours of arrival) were eligible. At triage, we randomized participants to receive liquid ondansetron or a color- and taste-matched placebo. The primary outcome was the proportion of patients requiring observation after the first physician evaluation. Secondary outcomes included post-intervention vomiting, ED length of stay, patient comfort, and 48-hour return visits. The trial was registered at ClinicalTrials.gov (NCT03052361). Results: Recruitment was stopped prematurely due to the COVID-19 pandemic. Ninety-one participants were randomized to ondansetron (n=44) or placebo (n=47). Overall, 40 patients (45%) were discharged immediately after the initial physician assessment, with no difference between the ondansetron and placebo groups (44% vs. 45%; absolute difference -1%, 95% CI: -20% to 19%). No significant differences were observed in any of the secondary outcomes. Conclusion: In this trial, triage nurse-initiated ondansetron administration did not reduce the need for ED observation in children with presumed gastroenteritis. Although underpowered, this study could inform researchers planning larger clinical trials.
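The reported absolute difference and its wide interval can be approximated with a Wald normal-approximation CI for a difference in proportions; a sketch using the abstract's proportions and group sizes (the authors' exact method may differ slightly, so the bounds are approximate):

```python
import math

# Sketch: absolute risk difference with a Wald 95% CI (normal approximation).
def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Proportions discharged immediately, per the abstract: 44% of 44 (ondansetron)
# vs 45% of 47 (placebo)
diff, lo, hi = risk_difference_ci(0.44, 44, 0.45, 47)
print(round(diff, 2), round(lo, 2), round(hi, 2))
```

The interval spanning roughly -0.21 to +0.19 illustrates why the trial, stopped early at 91 participants, is described as underpowered: it cannot rule out a clinically meaningful effect in either direction.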
Aravamuthan, B. R.; Bailes, A. F.; Baird, M.; Bjornson, K.; Bowen, I.; Bowman, A.; Boyer, E.; Gelineau-Morel, R.; Glader, L.; Gross, P.; Hall, S.; Hurvitz, E.; Kruer, M. C.; Larrew, T.; Marupudi, N.; McPhee, P.; Nichols, S.; Noritz, G.; Oleszek, J.; Ramsey, J.; Raskin, J.; Riordan, H.; Rocque, B.; Shah, M.; Shore, B.; Shrader, M. W.; Spence, D.; Stevenson, C.; Thomas, S. P.; Trost, J.; Wisniewski, S.
Objective: Cerebral palsy (CP) affects approximately 1 million Americans and 18 million individuals worldwide, yet contemporary US epidemiologic data remain limited. We aimed to use the Cerebral Palsy Research Network (CPRN) clinical registry to describe demographics and clinical characteristics of individuals with CP across the US and determine associations with gross motor function and genetic etiology. Methods: Registry subjects were included if they had clinician-confirmed CP and prospectively entered data for Gross Motor Function Classification System (GMFCS) level, gestational age, genetic etiology, CP distribution, and tone/movement types. Logistic regression was used to determine which of these variables, plus race, sex, ethnicity, and age, were associated with GMFCS level and genetic etiology. Results: A total of 9,756 children and adults with CP from 22 CPRN sites met inclusion criteria. Participants were predominantly White (73.0%), male (57.3%), non-Hispanic (87.8%), and younger than 18 years (73.7%). Most were classified as GMFCS levels I-III (55.6%), were born preterm (52.8%), and had spasticity (83.8%); quadriplegia was the most common distribution (41.9%), and 12.2% were identified as having a genetic etiology. Tone/movement types, CP distribution, and gestational age were significantly associated with both GMFCS level and genetic etiology (p<0.001). Compared to White individuals, Black individuals were more likely to have greater gross motor impairment (p<0.001). Conclusion: In this large US cohort, clinical and demographic factors, including race, were associated with gross motor function and genetic etiology in CP. These findings highlight persistent disparities and demonstrate the value of a national clinical registry for informing prognostication, quality improvement efforts, and targeted genetic testing strategies.
Bannett, Y.; Pillai, M.; Huang, T.; Luo, I.; Gunturkun, F.; Hernandez-Boussard, T.
Importance: Guideline-concordant care for young children with attention-deficit/hyperactivity disorder (ADHD) includes recommending parent training in behavior management (PTBM) as first-line treatment. However, assessing guideline adherence through manual chart review is time-consuming and costly, limiting scalable and timely quality-of-care measurement. Objective: To evaluate the accuracy and explainability of large language models (LLMs) in identifying PTBM recommendations in pediatric electronic health record (EHR) notes as a scalable alternative to manual chart review. Design, Setting, and Participants: This retrospective cohort study was conducted in a community-based pediatric healthcare network in California consisting of 27 primary care clinics. The study cohort included children aged 4-6 years with ≥2 primary care visits between 2020-2024 and ICD-10 diagnoses of ADHD or ADHD symptoms (n=542 patients). Clinical notes from the first ADHD-related visit were included. A stratified subset of 122 notes, including all cases with model disagreement, was manually annotated to assess model performance in identifying PTBM recommendations and to rank model explanations. Exposures: Assessment and plan sections of clinical notes were analyzed using three generative large language models (Claude-3.5, GPT-4o, and LLaMA-3.3-70B) to identify the presence of PTBM recommendations and generate explanatory rationales and documentation evidence. Main Outcomes and Measures: Model performance in identifying PTBM recommendations (measured by sensitivity, positive predictive value (PPV), and F1-score) and qualitative explainability ratings of model-generated rationales (based on the QUEST framework). Results: All three models demonstrated high performance compared to expert chart review. Claude-3.5 showed balanced performance (sensitivity=0.89, PPV=0.95, F1-score=0.92) and ranked highest in explainability. LLaMA-3.3-70B achieved sensitivity=0.91, PPV=0.89, and F1-score=0.90, ranking second for explainability. GPT-4o had the highest PPV (0.97) but lowest sensitivity (0.82), with an F1-score of 0.89 and the lowest explainability ranking. Based on classifications from the best-performing model, Claude-3.5, 26.4% (143/542) of patients had documented PTBM recommendations at their first ADHD-related visit. Conclusions and Relevance: LLMs can accurately extract guideline-concordant clinician recommendations for non-pharmacological ADHD treatment from unstructured clinical notes while providing clear explanations and supporting evidence. Evaluating model explainability as part of LLM implementation for medical chart review tasks can promote transparent and scalable solutions for quality-of-care measurement.
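The reported F1-scores are simply the harmonic mean of sensitivity and PPV, which can be verified from the abstract's own numbers:

```python
# F1 as the harmonic mean of sensitivity and positive predictive value.
def f1_score(sensitivity, ppv):
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# Values from the abstract: Claude-3.5 and GPT-4o
print(round(f1_score(0.89, 0.95), 2))  # → 0.92
print(round(f1_score(0.82, 0.97), 2))  # → 0.89
```

The harmonic mean penalizes imbalance, which is why GPT-4o's very high PPV cannot offset its lower sensitivity in the combined score.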
Hamida, H. B.; El Ouaer, M.; Abdelmoula, S.; El Ghali, M.; Bizid, M.; Chamtouri, I.; Monastiri, K.
Background: Patent ductus arteriosus (PDA) is a common and potentially serious cardiovascular condition in preterm infants, particularly those with low gestational age and birth weight. Its management remains controversial due to variability in screening, diagnostic criteria, and treatment strategies. This study aimed to evaluate risk factors, outcomes, and management strategies for PDA in preterm infants, and to identify predictors of clinical and echocardiographic response to therapy. Methods: We conducted a retrospective cohort study over a 4-year period (2016-2019) in the neonatal intensive care unit (NICU) of a tertiary care center. All consecutive preterm infants admitted during the study period were eligible. Infants with echocardiographically confirmed PDA who received pharmacological treatment with intravenous paracetamol or ibuprofen were included in the analysis. Missing data were minimal and handled using available-case analysis. Statistical analyses included descriptive statistics, Pearson's chi-square test, and multivariable logistic regression. Results: Among 2,154 preterm infants admitted to the NICU, 60 were diagnosed with PDA (incidence: 2.8%). The mean gestational age was 29 ± 2.6 weeks, and the median birth weight was 1200 g. Respiratory distress occurred in 95% of cases, mainly due to hyaline membrane disease (86.7%). PDA was symptomatic in 80% of infants. First-line treatment resulted in clinical improvement in 77% and ductal closure in 83.3% of cases, most within 3 days. Predictors of successful closure included gestational age ≥28 weeks (OR = 5.9; 95% CI: 1.7-20.2) and antenatal corticosteroid exposure (OR = 1.2; 95% CI: 1.0-1.6). Overall mortality was 35% and was significantly higher in infants born before 28 weeks (OR = 5.0; 95% CI: 2.4-10.3). Clinical improvement (OR = 3.7) and echocardiographic closure (OR = 4.5) after first-line treatment were associated with reduced mortality. Conclusions: PDA in preterm infants is associated with substantial morbidity and mortality, particularly in those born before 28 weeks of gestation. Early diagnosis, antenatal corticosteroid exposure, and timely pharmacological treatment may improve outcomes. Systematic echocardiographic screening in high-risk neonates should be considered.
Ramirez-Lopez, L.; Kang, P.
Irritable Bowel Syndrome (IBS) affects a substantial proportion of university students, yet its risk factors remain incompletely characterised in South Asian populations. We reanalysed a publicly available dataset of 550 Bangladeshi students from Hasan et al. (2025), conducting a data audit that identified implausible records, including males reporting menstrual symptoms, and reduced the analytic sample to 506 observations. Using Explainable Boosting Machines (EBMs), which capture non-linear effects and pairwise interactions without sacrificing interpretability, we found that psychological distress, elevated BMI and academic dissatisfaction were the strongest predictors of IBS (mean AUC = 0.852 across 100 stratified train-test splits). Critically, several findings diverged from the original logistic regression analysis. Physical activity showed a non-linear risk pattern only at high intensity; the association with gender was substantially weaker once metabolic and psychological factors were also accounted for; and malnourishment did not have as strong an impact as in the original study. These divergences likely arise because the machine-learning model captures non-linear effects and interactions that were not represented in the original regression specification. Our findings underscore the value of reanalysing existing datasets with methods suited to capturing complexity and highlight data quality verification as a necessary step in secondary analysis.
Mutibwa, S.; Wandiembe, S.; Mbonye, K.; Nsimbe, D.
Background: Preterm births contribute to approximately 35% of neonatal deaths globally, with an estimated 13.4 million infants born prematurely each year. Despite this substantial burden, limited evidence exists on time to discharge and its determinants among preterm neonates admitted to Neonatal Intensive Care Units (NICUs), particularly in rural Ugandan settings. This study aimed to investigate time to discharge and associated factors among preterm neonates admitted to Kiwoko Hospital in Nakaseke District, Uganda. Methods: A retrospective cohort study was conducted using secondary data from Kiwoko Hospital on preterm neonates admitted to the Neonatal Intensive Care Unit (NICU) between 2020 and 2021 (n = 847). The cumulative incidence function was used to estimate the probability of discharge within 28 days of admission, accounting for competing events. A Fine and Gray sub-distribution hazard regression model was fitted to identify factors associated with time to discharge. Results: Of the 847 preterm admissions, 70.1% were discharged alive within 28 days. The median time to discharge was 14 days. The cumulative incidence of discharge by 28 days was 68%, accounting for competing events. During follow-up, 165 neonates did not complete the 28-day period, including 88 deaths. Factors significantly associated with time to discharge included place of delivery (SHR: 0.62; 95% CI: 0.53-0.73; p<0.001), maternal residence in other districts (SHR: 0.69; 95% CI: 0.48-0.99; p=0.044), extreme preterm (SHR: 0.05; 95% CI: 0.03-0.09; p<0.001), very preterm (SHR: 0.18; 95% CI: 0.14-0.25; p<0.001), moderate preterm (SHR: 0.59; 95% CI: 0.46-0.76; p<0.001), triplet births (SHR: 0.40; 95% CI: 0.23-0.68; p=0.001), 2-4 ANC visits (SHR: 0.70; 95% CI: 0.56-0.87; p=0.002), <=1 ANC visit (SHR: 0.64; 95% CI: 0.49-0.85; p=0.002), respiratory distress syndrome (SHR: 0.64; 95% CI: 0.48-0.74; p<0.001), and birth trauma (SHR: 2.62; 95% CI: 1.60-4.29; p<0.001). 
Conclusions: Respiratory distress syndrome, fewer antenatal care visits, out-of-district residence, and higher degrees of prematurity were associated with prolonged time to discharge among preterm neonates. Strengthening antenatal care utilization and improving access to quality neonatal care in underserved areas may enhance discharge outcomes.
Pandit, A. S.; Chaudri, T.; Chaudri, Z.; Vasilica, A. M.; Dhaliwal, J.; Sayar, Z.; Cohen, H.; Westwood, J. P.; Toma, A. K.
Background: Venous thromboembolism (VTE) remains a major cause of perioperative morbidity in cranial neurosurgery, yet clinical practice varies widely and formal guidelines are inconsistent. Understanding internationally sampled neurosurgical practice is essential for informing consensus and future trials. Methods: An international, two-stage, cross-sectional, internet-based survey was conducted. Practising neurosurgeons performing elective adult cranial surgery were eligible. Descriptive statistics were used to summarise practice. Responses covered patterns of pre-operative haemostasis decision making, use and timing of mechanical and/or chemical prophylaxis, use of perioperative imaging prior to anticoagulation, and frequency of clinical assessment for VTE. Associations with geographical income status, subspecialty, and years post-certification were statistically tested. Practice heterogeneity was quantified and contextual influence was summarised using mean effect sizes across stratifying variables to determine domains of true equipoise. Results: Of 585 responses, 456 (78%) met criteria for inclusion, representing 322 units across 78 countries (71% high-income). Thirteen per cent reported no departmental VTE plan; 23% followed no guidelines and 12% used multiple. Routine pre-operative testing almost universally included haemoglobin/platelets/haematocrit, with fibrinogen more common in high-income settings. Compared with high-income country respondents, low- and middle-income respondents reported higher haemoglobin transfusion thresholds (>90 g/L; p<0.001), shorter antiplatelet interruption (p≤0.03), and less frequent outpatient VTE assessment (p<0.001). Mechanical prophylaxis was common (TEDs 81%, IPC 62%), typically started pre- or intra-operatively. Among those completing the chemoprophylaxis section (n=310), 57% required a CT or MRI scan before LMWH, which was then initiated on average 31.4 hours after surgery.
1% of respondents did not routinely use LMWH. Many clinical decisions demonstrated statistical equipoise, i.e. high heterogeneity with low contextual influence. Conclusion: Peri-operative haemostasis and VTE prophylaxis practices in adult elective cranial neurosurgery vary substantially worldwide, with some decisions reflecting geographical or socioeconomic differences and many others reflecting true clinical equipoise rather than contextual determinants. By mapping contemporary real-world practice across diverse health-system contexts, this study provides a necessary empirical foundation for rational trial design and future guideline development.
Gollie, J.; Ryan, A. S.; Harris-Love, M. O.; Kokkinos, P.; Scholten, J.; Pugh, R. J.; Hazel, C. G.; Blackman, M. R.
Physical inactivity is common in chronic kidney disease (CKD) and is associated with poor neuromuscular and functional outcomes. Whether habitual physical activity (PA) influences adaptations to structured exercise in CKD remains unclear. This study examined whether adaptations to combined flywheel resistance and aerobic exercise (FRE+AE) differed based on self-reported PA in Veterans with CKD stages 3 and 4. Twenty older male Veterans with CKD stages 3-4 (mean eGFR 37.9 ± 10.2 mL/min/1.73 m2) were randomized to six weeks of FRE+AE (n=11) or health education (EDU; n=9). Participants were classified as meeting (Meets PA) or below (Low PA) weekly moderate-intensity PA recommendations using the 7-day Physical Activity Recall. Outcomes included vastus lateralis muscle thickness (VL MT), knee extensor power output (60°/s and 180°/s), gait speed (GS), and five-repetition sit-to-stand (STS). FRE+AE increased VL MT (p=0.030), power output at 180°/s (p=0.021), and GS (p=0.001), and reduced STS time (p=0.012), with significant between-group differences versus EDU for VL MT (p=0.009) and GS (p=0.028). Low PA experienced greater increases in power output at 60°/s (Hedges' g: Low PA=0.44, Meets PA=0.25) and 180°/s (Hedges' g: Low PA=1.38, Meets PA=0.38) compared to Meets PA after FRE+AE. Conversely, Meets PA had greater improvements in GS (Hedges' g: Low PA=0.93, Meets PA=1.29) and STS (Hedges' g: Low PA=-0.72, Meets PA=-2.20) compared to Low PA. Six weeks of FRE+AE produced clinically meaningful neuromuscular and functional improvements in Veterans with CKD stages 3 and 4 irrespective of PA level, supporting FRE+AE as a feasible intervention in this population.